
📌 Retain class distribution for seed 2:
Class 0: 4500
Class 1: 4500
Class 2: 4500
Class 3: 4500
Class 4: 4500
Class 5: 4500
Class 6: 4500
Class 7: 4500
Class 8: 4500
Class 9: 4500

📌 Forget class distribution for seed 2:
Class 0: 500
Class 1: 500
Class 2: 500
Class 3: 500
Class 4: 500
Class 5: 500
Class 6: 500
Class 7: 500
Class 8: 500
Class 9: 500

📊 Updated class distribution:
Retain set:
  Class 0: 4875
  Class 1: 4875
  Class 2: 4875
  Class 3: 4875
  Class 4: 4875
  Class 5: 4875
  Class 6: 4875
  Class 7: 4875
  Class 8: 4875
  Class 9: 4875
Forget set:
  Class 0: 125
  Class 1: 125
  Class 2: 125
  Class 3: 125
  Class 4: 125
  Class 5: 125
  Class 6: 125
  Class 7: 125
  Class 8: 125
  Class 9: 125
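The log first shows a 4500/500 per-class split and then an updated 4875/125 one, i.e., 2.5% of each class moved to the forget set, keeping both sets class-balanced. A minimal, illustrative sketch of how such a seeded, class-balanced split can be produced (function and variable names are hypothetical, not taken from the original code):

```python
import numpy as np
from torch.utils.data import Subset
from torchvision.datasets import CIFAR10

def class_balanced_split(dataset, forget_per_class, num_classes=10, seed=2):
    """Draw the same number of forget samples from every class, so both
    retain and forget sets keep a uniform class distribution."""
    labels = np.asarray(dataset.targets)  # torchvision CIFAR-10 stores labels here
    rng = np.random.default_rng(seed)
    forget_idx = np.concatenate([
        rng.choice(np.where(labels == c)[0], forget_per_class, replace=False)
        for c in range(num_classes)
    ])
    retain_idx = np.setdiff1d(np.arange(len(labels)), forget_idx)
    return Subset(dataset, retain_idx.tolist()), Subset(dataset, forget_idx.tolist())

# e.g. 125 forget images per class -> the 4875/125 distribution above
train_set = CIFAR10("./data", train=True, download=True)
retain_set, forget_set = class_balanced_split(train_set, forget_per_class=125)
```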
⚠️ Warning: Retain train loader may not be shuffled.
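The warning above suggests the retain loader was built without shuffling, which matters for SGD: with a fixed visiting order, every epoch sees the same batch composition. Assuming standard PyTorch DataLoaders (and the `retain_set` from the sketch above), the check and the fix would look roughly like this:

```python
from torch.utils.data import DataLoader, RandomSampler

# shuffle=True makes the loader draw a fresh random order every epoch
retain_train_loader = DataLoader(retain_set, batch_size=256, shuffle=True,
                                 num_workers=4, pin_memory=True)

# A DataLoader is shuffled exactly when its sampler is a RandomSampler.
if not isinstance(retain_train_loader.sampler, RandomSampler):
    print("⚠️ Warning: Retain train loader may not be shuffled.")
```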
Training Epoch: 1 [256/48750]	Loss: 2.4253	LR: 0.000000
Training Epoch: 1 [512/48750]	Loss: 2.4555	LR: 0.000524
Training Epoch: 1 [768/48750]	Loss: 2.3948	LR: 0.001047
Training Epoch: 1 [1024/48750]	Loss: 2.3893	LR: 0.001571
Training Epoch: 1 [1280/48750]	Loss: 2.3208	LR: 0.002094
Training Epoch: 1 [1536/48750]	Loss: 2.2977	LR: 0.002618
Training Epoch: 1 [1792/48750]	Loss: 2.2225	LR: 0.003141
Training Epoch: 1 [2048/48750]	Loss: 2.2129	LR: 0.003665
Training Epoch: 1 [2304/48750]	Loss: 2.1714	LR: 0.004188
Training Epoch: 1 [2560/48750]	Loss: 2.2710	LR: 0.004712
Training Epoch: 1 [2816/48750]	Loss: 2.3678	LR: 0.005236
Training Epoch: 1 [3072/48750]	Loss: 2.2127	LR: 0.005759
Training Epoch: 1 [3328/48750]	Loss: 2.1486	LR: 0.006283
Training Epoch: 1 [3584/48750]	Loss: 1.9864	LR: 0.006806
Training Epoch: 1 [3840/48750]	Loss: 2.0635	LR: 0.007330
Training Epoch: 1 [4096/48750]	Loss: 1.9605	LR: 0.007853
Training Epoch: 1 [4352/48750]	Loss: 1.8925	LR: 0.008377
Training Epoch: 1 [4608/48750]	Loss: 1.9716	LR: 0.008901
Training Epoch: 1 [4864/48750]	Loss: 2.0786	LR: 0.009424
Training Epoch: 1 [5120/48750]	Loss: 1.8410	LR: 0.009948
Training Epoch: 1 [5376/48750]	Loss: 1.7812	LR: 0.010471
Training Epoch: 1 [5632/48750]	Loss: 1.8177	LR: 0.010995
Training Epoch: 1 [5888/48750]	Loss: 1.8416	LR: 0.011518
Training Epoch: 1 [6144/48750]	Loss: 1.8883	LR: 0.012042
Training Epoch: 1 [6400/48750]	Loss: 1.8314	LR: 0.012565
Training Epoch: 1 [6656/48750]	Loss: 1.7804	LR: 0.013089
Training Epoch: 1 [6912/48750]	Loss: 1.8048	LR: 0.013613
Training Epoch: 1 [7168/48750]	Loss: 1.7822	LR: 0.014136
Training Epoch: 1 [7424/48750]	Loss: 1.7284	LR: 0.014660
Training Epoch: 1 [7680/48750]	Loss: 1.5992	LR: 0.015183
Training Epoch: 1 [7936/48750]	Loss: 1.6597	LR: 0.015707
Training Epoch: 1 [8192/48750]	Loss: 1.5678	LR: 0.016230
Training Epoch: 1 [8448/48750]	Loss: 1.6922	LR: 0.016754
Training Epoch: 1 [8704/48750]	Loss: 1.6248	LR: 0.017277
Training Epoch: 1 [8960/48750]	Loss: 1.5764	LR: 0.017801
Training Epoch: 1 [9216/48750]	Loss: 1.6099	LR: 0.018325
Training Epoch: 1 [9472/48750]	Loss: 1.5590	LR: 0.018848
Training Epoch: 1 [9728/48750]	Loss: 1.6113	LR: 0.019372
Training Epoch: 1 [9984/48750]	Loss: 1.5523	LR: 0.019895
Training Epoch: 1 [10240/48750]	Loss: 1.6003	LR: 0.020419
Training Epoch: 1 [10496/48750]	Loss: 1.5432	LR: 0.020942
Training Epoch: 1 [10752/48750]	Loss: 1.5066	LR: 0.021466
Training Epoch: 1 [11008/48750]	Loss: 1.6380	LR: 0.021990
Training Epoch: 1 [11264/48750]	Loss: 1.5198	LR: 0.022513
Training Epoch: 1 [11520/48750]	Loss: 1.5841	LR: 0.023037
Training Epoch: 1 [11776/48750]	Loss: 1.5383	LR: 0.023560
Training Epoch: 1 [12032/48750]	Loss: 1.6887	LR: 0.024084
Training Epoch: 1 [12288/48750]	Loss: 1.6903	LR: 0.024607
Training Epoch: 1 [12544/48750]	Loss: 1.6217	LR: 0.025131
Training Epoch: 1 [12800/48750]	Loss: 1.7161	LR: 0.025654
Training Epoch: 1 [13056/48750]	Loss: 1.6090	LR: 0.026178
Training Epoch: 1 [13312/48750]	Loss: 1.7395	LR: 0.026702
Training Epoch: 1 [13568/48750]	Loss: 1.5688	LR: 0.027225
Training Epoch: 1 [13824/48750]	Loss: 1.7404	LR: 0.027749
Training Epoch: 1 [14080/48750]	Loss: 1.5536	LR: 0.028272
Training Epoch: 1 [14336/48750]	Loss: 1.6322	LR: 0.028796
Training Epoch: 1 [14592/48750]	Loss: 1.6992	LR: 0.029319
Training Epoch: 1 [14848/48750]	Loss: 1.6610	LR: 0.029843
Training Epoch: 1 [15104/48750]	Loss: 1.7247	LR: 0.030366
Training Epoch: 1 [15360/48750]	Loss: 1.4818	LR: 0.030890
Training Epoch: 1 [15616/48750]	Loss: 1.6418	LR: 0.031414
Training Epoch: 1 [15872/48750]	Loss: 1.6744	LR: 0.031937
Training Epoch: 1 [16128/48750]	Loss: 1.6753	LR: 0.032461
Training Epoch: 1 [16384/48750]	Loss: 1.4918	LR: 0.032984
Training Epoch: 1 [16640/48750]	Loss: 1.6103	LR: 0.033508
Training Epoch: 1 [16896/48750]	Loss: 1.6248	LR: 0.034031
Training Epoch: 1 [17152/48750]	Loss: 1.5439	LR: 0.034555
Training Epoch: 1 [17408/48750]	Loss: 1.7211	LR: 0.035079
Training Epoch: 1 [17664/48750]	Loss: 1.4940	LR: 0.035602
Training Epoch: 1 [17920/48750]	Loss: 1.5186	LR: 0.036126
Training Epoch: 1 [18176/48750]	Loss: 1.4956	LR: 0.036649
Training Epoch: 1 [18432/48750]	Loss: 1.5023	LR: 0.037173
Training Epoch: 1 [18688/48750]	Loss: 1.7239	LR: 0.037696
Training Epoch: 1 [18944/48750]	Loss: 1.4919	LR: 0.038220
Training Epoch: 1 [19200/48750]	Loss: 1.6070	LR: 0.038743
Training Epoch: 1 [19456/48750]	Loss: 1.3534	LR: 0.039267
Training Epoch: 1 [19712/48750]	Loss: 1.6358	LR: 0.039791
Training Epoch: 1 [19968/48750]	Loss: 1.6902	LR: 0.040314
Training Epoch: 1 [20224/48750]	Loss: 1.5517	LR: 0.040838
Training Epoch: 1 [20480/48750]	Loss: 1.5469	LR: 0.041361
Training Epoch: 1 [20736/48750]	Loss: 1.5145	LR: 0.041885
Training Epoch: 1 [20992/48750]	Loss: 1.5775	LR: 0.042408
Training Epoch: 1 [21248/48750]	Loss: 1.3814	LR: 0.042932
Training Epoch: 1 [21504/48750]	Loss: 1.5034	LR: 0.043455
Training Epoch: 1 [21760/48750]	Loss: 1.4373	LR: 0.043979
Training Epoch: 1 [22016/48750]	Loss: 1.6547	LR: 0.044503
Training Epoch: 1 [22272/48750]	Loss: 1.5396	LR: 0.045026
Training Epoch: 1 [22528/48750]	Loss: 1.5314	LR: 0.045550
Training Epoch: 1 [22784/48750]	Loss: 1.3672	LR: 0.046073
Training Epoch: 1 [23040/48750]	Loss: 1.5034	LR: 0.046597
Training Epoch: 1 [23296/48750]	Loss: 1.2910	LR: 0.047120
Training Epoch: 1 [23552/48750]	Loss: 1.5166	LR: 0.047644
Training Epoch: 1 [23808/48750]	Loss: 1.5855	LR: 0.048168
Training Epoch: 1 [24064/48750]	Loss: 1.3303	LR: 0.048691
Training Epoch: 1 [24320/48750]	Loss: 1.3362	LR: 0.049215
Training Epoch: 1 [24576/48750]	Loss: 1.3626	LR: 0.049738
Training Epoch: 1 [24832/48750]	Loss: 1.3428	LR: 0.050262
Training Epoch: 1 [25088/48750]	Loss: 1.4232	LR: 0.050785
Training Epoch: 1 [25344/48750]	Loss: 1.4141	LR: 0.051309
Training Epoch: 1 [25600/48750]	Loss: 1.3659	LR: 0.051832
Training Epoch: 1 [25856/48750]	Loss: 1.2943	LR: 0.052356
Training Epoch: 1 [26112/48750]	Loss: 1.5583	LR: 0.052880
Training Epoch: 1 [26368/48750]	Loss: 1.2978	LR: 0.053403
Training Epoch: 1 [26624/48750]	Loss: 1.3265	LR: 0.053927
Training Epoch: 1 [26880/48750]	Loss: 1.3085	LR: 0.054450
Training Epoch: 1 [27136/48750]	Loss: 1.4170	LR: 0.054974
Training Epoch: 1 [27392/48750]	Loss: 1.2664	LR: 0.055497
Training Epoch: 1 [27648/48750]	Loss: 1.2575	LR: 0.056021
Training Epoch: 1 [27904/48750]	Loss: 1.3505	LR: 0.056545
Training Epoch: 1 [28160/48750]	Loss: 1.5093	LR: 0.057068
Training Epoch: 1 [28416/48750]	Loss: 1.3762	LR: 0.057592
Training Epoch: 1 [28672/48750]	Loss: 1.2259	LR: 0.058115
Training Epoch: 1 [28928/48750]	Loss: 1.1637	LR: 0.058639
Training Epoch: 1 [29184/48750]	Loss: 1.2873	LR: 0.059162
Training Epoch: 1 [29440/48750]	Loss: 1.5144	LR: 0.059686
Training Epoch: 1 [29696/48750]	Loss: 1.3729	LR: 0.060209
Training Epoch: 1 [29952/48750]	Loss: 1.4298	LR: 0.060733
Training Epoch: 1 [30208/48750]	Loss: 1.4641	LR: 0.061257
Training Epoch: 1 [30464/48750]	Loss: 1.1833	LR: 0.061780
Training Epoch: 1 [30720/48750]	Loss: 1.3795	LR: 0.062304
Training Epoch: 1 [30976/48750]	Loss: 1.1349	LR: 0.062827
Training Epoch: 1 [31232/48750]	Loss: 1.4466	LR: 0.063351
Training Epoch: 1 [31488/48750]	Loss: 1.2332	LR: 0.063874
Training Epoch: 1 [31744/48750]	Loss: 1.5579	LR: 0.064398
Training Epoch: 1 [32000/48750]	Loss: 1.3986	LR: 0.064921
Training Epoch: 1 [32256/48750]	Loss: 1.4030	LR: 0.065445
Training Epoch: 1 [32512/48750]	Loss: 1.4621	LR: 0.065969
Training Epoch: 1 [32768/48750]	Loss: 1.3458	LR: 0.066492
Training Epoch: 1 [33024/48750]	Loss: 1.3272	LR: 0.067016
Training Epoch: 1 [33280/48750]	Loss: 1.3311	LR: 0.067539
Training Epoch: 1 [33536/48750]	Loss: 1.1759	LR: 0.068063
Training Epoch: 1 [33792/48750]	Loss: 1.4116	LR: 0.068586
Training Epoch: 1 [34048/48750]	Loss: 1.3661	LR: 0.069110
Training Epoch: 1 [34304/48750]	Loss: 1.2164	LR: 0.069634
Training Epoch: 1 [34560/48750]	Loss: 1.2707	LR: 0.070157
Training Epoch: 1 [34816/48750]	Loss: 1.4611	LR: 0.070681
Training Epoch: 1 [35072/48750]	Loss: 1.2014	LR: 0.071204
Training Epoch: 1 [35328/48750]	Loss: 1.2910	LR: 0.071728
Training Epoch: 1 [35584/48750]	Loss: 1.2682	LR: 0.072251
Training Epoch: 1 [35840/48750]	Loss: 1.2184	LR: 0.072775
Training Epoch: 1 [36096/48750]	Loss: 1.5015	LR: 0.073298
Training Epoch: 1 [36352/48750]	Loss: 1.3573	LR: 0.073822
Training Epoch: 1 [36608/48750]	Loss: 1.3883	LR: 0.074346
Training Epoch: 1 [36864/48750]	Loss: 1.5082	LR: 0.074869
Training Epoch: 1 [37120/48750]	Loss: 1.4465	LR: 0.075393
Training Epoch: 1 [37376/48750]	Loss: 1.2282	LR: 0.075916
Training Epoch: 1 [37632/48750]	Loss: 1.7153	LR: 0.076440
Training Epoch: 1 [37888/48750]	Loss: 1.2591	LR: 0.076963
Training Epoch: 1 [38144/48750]	Loss: 1.5605	LR: 0.077487
Training Epoch: 1 [38400/48750]	Loss: 1.4322	LR: 0.078010
Training Epoch: 1 [38656/48750]	Loss: 1.3379	LR: 0.078534
Training Epoch: 1 [38912/48750]	Loss: 1.4747	LR: 0.079058
Training Epoch: 1 [39168/48750]	Loss: 1.3387	LR: 0.079581
Training Epoch: 1 [39424/48750]	Loss: 1.6701	LR: 0.080105
Training Epoch: 1 [39680/48750]	Loss: 1.2729	LR: 0.080628
Training Epoch: 1 [39936/48750]	Loss: 1.1478	LR: 0.081152
Training Epoch: 1 [40192/48750]	Loss: 1.3902	LR: 0.081675
Training Epoch: 1 [40448/48750]	Loss: 1.4461	LR: 0.082199
Training Epoch: 1 [40704/48750]	Loss: 1.3249	LR: 0.082723
Training Epoch: 1 [40960/48750]	Loss: 1.2903	LR: 0.083246
Training Epoch: 1 [41216/48750]	Loss: 1.3221	LR: 0.083770
Training Epoch: 1 [41472/48750]	Loss: 1.3271	LR: 0.084293
Training Epoch: 1 [41728/48750]	Loss: 1.2886	LR: 0.084817
Training Epoch: 1 [41984/48750]	Loss: 1.3594	LR: 0.085340
Training Epoch: 1 [42240/48750]	Loss: 1.2050	LR: 0.085864
Training Epoch: 1 [42496/48750]	Loss: 1.2725	LR: 0.086387
Training Epoch: 1 [42752/48750]	Loss: 1.3312	LR: 0.086911
Training Epoch: 1 [43008/48750]	Loss: 1.2217	LR: 0.087435
Training Epoch: 1 [43264/48750]	Loss: 1.2774	LR: 0.087958
Training Epoch: 1 [43520/48750]	Loss: 1.3667	LR: 0.088482
Training Epoch: 1 [43776/48750]	Loss: 1.3142	LR: 0.089005
Training Epoch: 1 [44032/48750]	Loss: 1.1900	LR: 0.089529
Training Epoch: 1 [44288/48750]	Loss: 1.3323	LR: 0.090052
Training Epoch: 1 [44544/48750]	Loss: 1.0742	LR: 0.090576
Training Epoch: 1 [44800/48750]	Loss: 1.1744	LR: 0.091099
Training Epoch: 1 [45056/48750]	Loss: 1.2531	LR: 0.091623
Training Epoch: 1 [45312/48750]	Loss: 1.3714	LR: 0.092147
Training Epoch: 1 [45568/48750]	Loss: 1.2343	LR: 0.092670
Training Epoch: 1 [45824/48750]	Loss: 1.2215	LR: 0.093194
Training Epoch: 1 [46080/48750]	Loss: 1.2785	LR: 0.093717
Training Epoch: 1 [46336/48750]	Loss: 1.1344	LR: 0.094241
Training Epoch: 1 [46592/48750]	Loss: 1.3528	LR: 0.094764
Training Epoch: 1 [46848/48750]	Loss: 1.1853	LR: 0.095288
Training Epoch: 1 [47104/48750]	Loss: 1.1994	LR: 0.095812
Training Epoch: 1 [47360/48750]	Loss: 1.2457	LR: 0.096335
Training Epoch: 1 [47616/48750]	Loss: 1.3351	LR: 0.096859
Training Epoch: 1 [47872/48750]	Loss: 1.1118	LR: 0.097382
Training Epoch: 1 [48128/48750]	Loss: 1.3260	LR: 0.097906
Training Epoch: 1 [48384/48750]	Loss: 1.2283	LR: 0.098429
Training Epoch: 1 [48640/48750]	Loss: 1.2297	LR: 0.098953
Training Epoch: 1 [48750/48750]	Loss: 1.2995	LR: 0.099476
Epoch 1 - Average Train Loss: 1.5257, Train Accuracy: 0.4491
Epoch 1 training time consumed: 19.10s
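The LR column climbs linearly from 0 to roughly 0.1 over epoch 1 (the final value, 0.099476 ≈ 0.1 × 190/191, matches a 191-batch epoch at batch size 256), which is consistent with a per-batch linear warmup toward a base LR of 0.1 during the first epoch. A minimal sketch that reproduces this schedule and log format; the base LR, warmup length, momentum, and weight decay are inferred or assumed, not confirmed, and `model` (e.g., a ResNet-18) plus `retain_train_loader` are assumed from the earlier sketches:

```python
import torch
import torch.nn.functional as F

base_lr = 0.1
warmup_iters = len(retain_train_loader)  # warm up over the first epoch

optimizer = torch.optim.SGD(model.parameters(), lr=base_lr,
                            momentum=0.9, weight_decay=5e-4)
# Linear warmup: LR scales from 0 to base_lr one batch at a time.
warmup = torch.optim.lr_scheduler.LambdaLR(
    optimizer, lambda it: min(it / warmup_iters, 1.0))

model.train()
seen, total = 0, len(retain_train_loader.dataset)
for batch_idx, (images, targets) in enumerate(retain_train_loader, start=1):
    optimizer.zero_grad()
    loss = F.cross_entropy(model(images), targets)
    loss.backward()
    optimizer.step()
    warmup.step()  # advance the LR after every batch, not every epoch
    seen = min(seen + images.size(0), total)
    print(f"Training Epoch: 1 [{seen}/{total}]\tLoss: {loss.item():.4f}"
          f"\tLR: {optimizer.param_groups[0]['lr']:.6f}")
```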
Evaluating Network.....
Test set: Epoch: 1, Average loss: 0.0086, Accuracy: 0.4435, Time consumed: 0.95s
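An "Average loss" of 0.0086 alongside ~44% accuracy suggests the summed per-batch mean losses are divided by the dataset size (10,000) rather than by the number of batches; a sketch of an eval loop with that convention (an assumption inferred from the magnitude, not confirmed):

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def evaluate(model, test_loader, device="cuda"):
    model.eval()
    loss_sum, correct = 0.0, 0
    for images, targets in test_loader:
        images, targets = images.to(device), targets.to(device)
        logits = model(images)
        loss_sum += F.cross_entropy(logits, targets).item()  # batch-mean loss
        correct += (logits.argmax(dim=1) == targets).sum().item()
    n = len(test_loader.dataset)
    # Dividing summed batch-mean losses by |dataset| yields the small
    # "Average loss" figure seen in the log (≈ mean loss / batch size).
    return loss_sum / n, correct / n
```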
Saving weights file to checkpoint/retrain/ResNet18/Saturday_02_August_2025_19h_59m_29s/ResNet18-Cifar10-seed2-ret75-1-best.pth
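The checkpoint path encodes architecture, dataset, seed, split tag, and epoch, with `-best` written whenever test accuracy improves. A plausible reconstruction of that bookkeeping (`test_acc`, `seed`, and `epoch` are assumed to exist in the surrounding loop; the helper logic is illustrative):

```python
import os, time
import torch

# One directory per run, named with the launch timestamp (this strftime
# pattern matches the path format in the log line above).
run_dir = os.path.join("checkpoint", "retrain", "ResNet18",
                       time.strftime("%A_%d_%B_%Y_%Hh_%Mm_%Ss"))
os.makedirs(run_dir, exist_ok=True)

best_acc = 0.0  # in the real loop this persists across epochs
if test_acc > best_acc:  # keep only the best-performing weights so far
    best_acc = test_acc
    path = os.path.join(
        run_dir, f"ResNet18-Cifar10-seed{seed}-ret75-{epoch}-best.pth")
    print(f"Saving weights file to {path}")
    torch.save(model.state_dict(), path)
```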
Valid (Test) DL: 10000
Train DL: 50000
Retain Train DL: 48750
Forget Train DL: 1250
Retain Valid DL: 48750
Forget Valid DL: 1250
retain_prob Distribution: 10000 samples
test_prob Distribution: 10000 samples
forget_prob Distribution: 1250 samples
Set1 Distribution: 1250 samples
Set2 Distribution: 1250 samples
Set1 Distribution: 1250 samples
Set2 Distribution: 1250 samples
Set1 Distribution: 10000 samples
Set2 Distribution: 10000 samples
Set1 Distribution: 10000 samples
Set2 Distribution: 10000 samples
Test Accuracy: 43.896484375
Retain Accuracy: 44.662174224853516
Zero Retrain Forgetting (ZRF): 0.8846220970153809
Membership Inference Attack (MIA): 0.3656
Forget vs Retain Membership Inference Attack (MIA): 0.552
Forget vs Test Membership Inference Attack (MIA): 0.514
Test vs Retain Membership Inference Attack (MIA): 0.63125
Train vs Test Membership Inference Attack (MIA): 0.49525
Forget Set Accuracy (Df): 41.08061599731445
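The pairwise MIA rows (Forget vs Retain, Forget vs Test, ...) read as the accuracy of a binary attacker distinguishing two sample populations from the model's outputs, where 0.5 means indistinguishable; the matching "Set1/Set2 Distribution" counts above suggest the two sets are balanced to equal size before the attack. A common recipe consistent with these numbers, though not necessarily the exact attack used here, is logistic regression on per-sample losses:

```python
import numpy as np
import torch
import torch.nn.functional as F
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

@torch.no_grad()
def per_sample_losses(model, loader, device="cuda"):
    model.eval()
    out = []
    for images, targets in loader:
        logits = model(images.to(device))
        out.append(F.cross_entropy(logits, targets.to(device),
                                   reduction="none").cpu())
    return torch.cat(out).numpy()

def mia_score(model, loader_a, loader_b, seed=0):
    """Cross-validated accuracy of a loss-based logistic-regression
    attacker separating the two sets; 0.5 = indistinguishable."""
    la = per_sample_losses(model, loader_a)
    lb = per_sample_losses(model, loader_b)
    n = min(len(la), len(lb))            # balance the sets, as in the
    rng = np.random.default_rng(seed)    # "Set1/Set2: 1250 samples" lines
    la = rng.choice(la, n, replace=False)
    lb = rng.choice(lb, n, replace=False)
    X = np.concatenate([la, lb]).reshape(-1, 1)
    y = np.concatenate([np.zeros(n), np.ones(n)])
    return cross_val_score(LogisticRegression(), X, y, cv=5).mean()

# e.g. mia_score(model, forget_loader, test_loader) -> ~0.514 above
```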
Method Execution Time: 915.20 seconds
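Finally, ZRF (0.8846 above) is usually defined, following the competent/incompetent-teacher unlearning literature, as one minus the mean Jensen-Shannon divergence between the unlearned model's and a randomly initialized model's predictive distributions on the forget set, so 1.0 means the model behaves there as if it had never been trained. A sketch under that assumed definition:

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def zrf(unlearned_model, random_model, forget_loader, device="cuda"):
    """1 - mean JS divergence between the two models' softmax outputs on
    the forget set. random_model is a freshly initialized, untrained copy
    of the same architecture. Higher = closer to a never-trained model."""
    unlearned_model.eval()
    random_model.eval()
    js_total, n = 0.0, 0
    for images, _ in forget_loader:
        images = images.to(device)
        p = F.softmax(unlearned_model(images), dim=1)
        q = F.softmax(random_model(images), dim=1)
        m = 0.5 * (p + q)
        # JS(p, q) = 0.5 * KL(p || m) + 0.5 * KL(q || m), computed in nats
        js = 0.5 * (p * (p / m).log()).sum(1) + 0.5 * (q * (q / m).log()).sum(1)
        js_total += js.sum().item()
        n += images.size(0)
    return 1.0 - js_total / n
```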
